AI Oversight


AI Oversight and Human Mistakes: Evidence from Centre Court

Almog, David, Gauriot, Romain, Page, Lionel, Martin, Daniel

arXiv.org Artificial Intelligence

Powered by the increasing predictive capabilities of machine learning algorithms, artificial intelligence (AI) systems have begun to be used to overrule human mistakes in many settings. We provide the first field evidence that this AI oversight carries psychological costs that can impact human decision-making. We investigate one of the highest visibility settings in which AI oversight has occurred: the Hawk-Eye review of umpires in top tennis tournaments. We find that umpires lowered their overall mistake rate after the introduction of Hawk-Eye review, in line with rational inattention given psychological costs of being overruled by AI. We also find that umpires increased the rate at which they called balls in, which produced a shift from making Type II errors (calling a ball out when in) to Type I errors (calling a ball in when out). We structurally estimate the psychological costs of being overruled by AI using a model of rationally inattentive umpires, and our results suggest that because of these costs, umpires cared twice as much about Type II errors under AI oversight.
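The abstract's distinction between the two error types can be made concrete with a small sketch. The `error_rates` function and the toy call data below are hypothetical illustrations, not data or code from the paper: a Type I error is calling a ball "in" when it was out, and a Type II error is calling a ball "out" when it was in.

```python
def error_rates(calls):
    """Compute (Type I rate, Type II rate) from a list of (call, truth)
    pairs, where each element is 'in' or 'out'.

    Type I:  umpire called 'in'  but the ball was actually out.
    Type II: umpire called 'out' but the ball was actually in.
    """
    type1 = sum(1 for call, truth in calls if call == "in" and truth == "out")
    type2 = sum(1 for call, truth in calls if call == "out" and truth == "in")
    total = len(calls)
    return type1 / total, type2 / total

# Toy data (invented): after AI oversight, umpires call more balls "in",
# trading Type II errors for Type I errors, as the paper describes.
pre  = [("out", "in"), ("out", "out"), ("in", "in"), ("out", "in")]
post = [("in", "out"), ("out", "out"), ("in", "in"), ("in", "in")]

print(error_rates(pre))   # (0.0, 0.5)  -- only Type II errors
print(error_rates(post))  # (0.25, 0.0) -- shifted to Type I errors
```

The shift the paper documents is exactly this kind of movement between the two components of the returned pair, with the overall mistake rate (their sum) falling.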


Epic's overhaul of a flawed algorithm shows why AI oversight is a life-or-death issue

#artificialintelligence

Epic, the nation's dominant seller of electronic health records, was bracing for a catastrophe. It was June 2021, and a study about to be published in the Journal of the American Medical Association had found that Epic's artificial intelligence tool to predict sepsis, a deadly complication of infection, was prone to missing cases and flooding clinicians with false alarms. Reporters were clamoring for an explanation.


This AI attorney says companies need a chief AI officer -- pronto

#artificialintelligence

When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, "people were laughing at me," he said. Newman, who leads global law firm Baker McKenzie's machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, "What's that?" But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation and legislation swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said. This recognition led to a new Baker McKenzie report, released in March, called "Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence." The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organization's adoption, use and management of AI-enabled tools. In a press release announcing the survey, Newman said: "Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly."


Baker McKenzie Survey: As Usage of Artificial Intelligence Proliferates, Companies May Underestimate AI's Business Risks

#artificialintelligence

Companies in the US may be bullish on using artificial intelligence (AI), but many executives are ambivalent about its associated risks – especially when it comes to AI-enabled hiring and people management tools. According to a new survey by the global law firm Baker McKenzie, though 100 percent of senior executives agree there are risks associated with using AI, just 4 percent of respondents consider these risks to be "significant." Three-fourths of those surveyed indicate their organization uses AI for key human resources (HR) management and employment functions – for example, recruiting and hiring, performance and promotion, and analyzing employee attendance or productivity trends. The Baker McKenzie survey, "Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence," queried 500 US-based C-level executives who self-identified as part of the decision-making team responsible for their organization's adoption, use and management of AI-enabled tools. The telephone- and email-based survey was conducted in December 2021 and January 2022 with executives at companies averaging at least $10.3 billion in annual revenue, across a range of industries.


Elon Musk makes clear his stance on self-driving cars, AI oversight, and his 'ad for Mars'

#artificialintelligence

In an interview with Mathias Döpfner, the CEO of Axel Springer, Business Insider's parent company, Elon Musk revealed his thoughts on self-driving cars, oversight of artificial intelligence, and the reasons behind his quest to be buried on Mars. Musk explained that while artificial intelligence was a key priority among his various projects, "it's important to have some kind of government oversight." Musk, who had announced in October Tesla's release of a beta version of its long-awaited "full self-driving" software, clarified that he is "definitely not trying to take anyone's steering wheel away from them." "I'm just saying what will most likely occur, and I am certain about this, is that self-driving will become much safer than a human driver. Probably by a factor of 10," he told Döpfner, adding that the bar for whether a person will be able to drive or not will be much more "stringent" in the future when autonomous driving is "10 times safer." But as Business Insider's Graham Rapier reported, the top US safety regulator, the National Highway Traffic Safety Administration, has repeated that "no vehicle available for purchase today is capable of driving itself." "The most advanced vehicle technologies available for purchase today provide driver assistance and require a fully attentive human driver at all times performing the driving task and monitoring the surrounding environment. Abusing these technologies is, at a minimum, distracted driving. Every State in the Nation holds the driver responsible for the safe operation of the vehicle," the agency said.


Why Your Board Needs a Plan for AI Oversight

#artificialintelligence

We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI -- a discussion that's relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it's time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization's overall mission and risk management. According to McKinsey's 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations "comprehensively identify and prioritize" the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, "Fit for the Future: An Urgent Imperative for Board Leadership," 86% of board members "fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years."



Organizations Are Gearing Up for More Ethical and Responsible Use of AI, Finds Study - insideBIGDATA

#artificialintelligence

A new study shows that business leaders are taking steps to ensure responsible use of artificial intelligence (AI) within their organizations. Most AI adopters – which now account for 72 percent of organizations globally – conduct ethics training for their technologists (70 percent) and have ethics committees in place to review the use of AI (63 percent). AI leaders – organizations rating their deployment of AI "successful" or "highly successful" – also take the lead on responsible AI efforts: Almost all (92 percent) train their technologists in ethics compared to 48 percent of other AI adopters. The findings are based on a global survey of 305 business leaders, more than half of them chief information officers, chief technology officers, and chief analytics officers. The study, "AI Momentum, Maturity and Models for Success," was commissioned by SAS, Accenture Applied Intelligence and Intel, and conducted by Forbes Insights in July 2018. AI now has a real impact on people's lives, which highlights the importance of having a strong ethical framework surrounding its use, according to the report.